A Quick Primer on Self-Play in Deep Reinforcement Learning

#artificialintelligence

"Train tirelessly to defeat the greatest enemy, yourself, and to discover the greatest master, yourself." DeepMind has created AI that will crush any human player in Go, Chess, Shogi, and StarCraft II. OpenAI has made similar strides in complex strategy games, notably Dota 2. The agents in these games all achieved mastery using deep reinforcement learning. Yet this is only part of the story. What was the magic sauce that sent these systems' playing ability out of the atmosphere? A simple framework called self-play, in which an agent learns to play a game by playing against itself: its opponent is always a copy of its own current policy.
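The self-play loop can be made concrete with a toy sketch. The example below uses regret matching on rock-paper-scissors rather than deep RL (the systems named above train neural networks; this substitution is only to keep the example small), but the core idea is the same: at every iteration the opponent is the agent's own current strategy, and the strategy averaged over training converges toward the uniform Nash equilibrium. All names here are illustrative, not from the article.

```python
# Minimal self-play sketch via regret matching on rock-paper-scissors.
# (Illustrative only: the systems discussed above use deep RL, not regret matching.)

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def self_play(iters=10000):
    # Small asymmetric seed so play does not start trivially uniform.
    regrets = [1.0, 0.0, 0.0]
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iters):
        pos = [max(r, 0.0) for r in regrets]
        total = sum(pos)
        strat = [p / total for p in pos] if total else [1 / ACTIONS] * ACTIONS
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        # Self-play step: the opponent IS the agent's current strategy.
        # Regret for action a = expected payoff of a against that opponent
        # (the agent's expected payoff against itself is 0 by symmetry).
        for a in range(ACTIONS):
            regrets[a] += sum(strat[b] * payoff(a, b) for b in range(ACTIONS))
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy over training
```

Running `self_play()` yields an average strategy close to (1/3, 1/3, 1/3): by playing only against itself, the agent is driven toward an unexploitable policy, which is the essence of self-play.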


Navigating the landscape of multiplayer games

#artificialintelligence

Multiplayer games have long been used as testbeds in artificial intelligence research, aptly referred to as the Drosophila of artificial intelligence. Traditionally, researchers have focused on using well-known games to build strong agents. This progress, however, can be better informed by characterizing games and their topological landscape. Tackling this latter question can facilitate understanding of agents and help determine what game an agent should target next as part of its training. Here, we show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games, quantifying relationships between games of varying sizes and characteristics. We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another. Our results culminate in a demonstration leveraging this information to generate new and interesting games, including mixtures of empirical games synthesized from real-world games. Multiplayer games can be used as testbeds for the development of learning algorithms for artificial intelligence. Omidshafiei et al. show how to characterize and compare such games using a graph-based approach, generating new games that could potentially be interesting for training in a curriculum.
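A much-simplified sketch of the response-graph idea: given a pairwise payoff table between strategies, draw a directed edge from each strategy to every strategy that beats it, then compute a network measure on the resulting graph. Counting directed 3-cycles (a crude intransitivity indicator) distinguishes a cyclic game like rock-paper-scissors from a transitive one. This toy omits the paper's large-scale empirical games and richer measures; the function names are assumptions.

```python
from itertools import permutations

def response_graph(payoff):
    """Directed edge i -> j when strategy j gets positive payoff against i.
    payoff[j][i] is the payoff of strategy j when facing strategy i."""
    n = len(payoff)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and payoff[j][i] > 0}

def three_cycles(edges, n):
    """Count directed 3-cycles, a simple proxy for intransitivity."""
    count = 0
    for a, b, c in permutations(range(n), 3):
        if (a, b) in edges and (b, c) in edges and (c, a) in edges:
            count += 1
    return count // 3  # each cycle is found once per rotation
```

On rock-paper-scissors this yields one 3-cycle (rock -> paper -> scissors -> rock), while a strictly transitive game yields none, so even this crude measure starts to separate games by structure.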


Exploiting the Computational Power of the Graphics Card: Optimal State Space Planning on the GPU

Sulewski, Damian (TZI, Universität Bremen) | Edelkamp, Stefan (TZI, Universität Bremen) | Kissmann, Peter (TZI, Universität Bremen)

AAAI Conferences

In this paper, optimal state-space planning is parallelized by exploiting the processing power of a graphics card. The two exploration steps, namely selecting the actions to be applied and generating the successors, are performed on the graphics processing unit. Duplicate detection, however, is delayed and executed on the central processing unit. Multiple cores are employed to bypass main-memory latency. To increase processing speed for exact duplicate detection, the hash tables are lock-free. Moreover, a bucket-based representation enhances the concurrent distribution of frontier states. The planner supports cost-first exploration and is able to deal with a considerable fraction of current PDDL, including numerical state variables, complex objective functions, and goal preferences. It can maximize the net benefit. Experimental findings show visible performance gains, especially for larger benchmark problems.
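The delayed-duplicate-detection scheme can be sketched in a few lines. In this toy version, each breadth-first layer is expanded in full without any duplicate checks (standing in for the GPU successor-generation step), and duplicates are then removed in one batch pass against the visited set (the CPU step). This is a sequential Python illustration of the control flow only; the paper's actual system uses GPU kernels, lock-free hash tables, and bucket-based frontiers.

```python
def delayed_dedup_search(start, goal, successors):
    """Layered BFS with delayed duplicate detection: expand the whole
    frontier first (the GPU-side step in the paper), then filter
    duplicates in one batch (the CPU-side step). Returns goal depth."""
    visited = {start}
    frontier = [start]
    depth = 0
    while frontier:
        if goal in frontier:
            return depth
        # Expansion phase: generate all successors, no duplicate checks yet.
        raw = [s for state in frontier for s in successors(state)]
        # Delayed duplicate detection: one pass against the visited set.
        frontier = []
        for s in raw:
            if s not in visited:
                visited.add(s)
                frontier.append(s)
        depth += 1
    return None  # goal unreachable
```

Batching the duplicate check is what makes the expansion phase embarrassingly parallel: successor generation needs no shared state, so it maps naturally onto GPU threads.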


AltAltp: Online Parallelization of Plans with Heuristic State Search

Sanchez, R., Kambhampati, S.

Journal of Artificial Intelligence Research

Despite their near dominance, heuristic state search planners still lag behind disjunctive planners in the generation of parallel plans in classical planning. The reason is that directly searching for parallel solutions in state space planners would require the planners to branch on all possible subsets of parallel actions, thus increasing the branching factor exponentially. We present a variant of our heuristic state search planner AltAlt, called AltAltp which generates parallel plans by using greedy online parallelization of partial plans. The greedy approach is significantly informed by the use of novel distance heuristics that AltAltp derives from a graphplan-style planning graph for the problem. While this approach is not guaranteed to provide optimal parallel plans, empirical results show that AltAltp is capable of generating good quality parallel plans at a fraction of the cost incurred by the disjunctive planners.
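The greedy merging idea behind the parallelization can be illustrated post hoc: walk a sequential plan left to right and fold each action into the current parallel step whenever it is independent of every action already in that step. Note that AltAltp does this online during heuristic search, guided by planning-graph distance heuristics; the sketch below is only the merging rule, and the `independent` predicate (here, disjoint sets of affected variables) is an assumption for illustration.

```python
def parallelize(plan, independent):
    """Greedy left-to-right parallelization of a sequential plan:
    fold an action into the current step if it is independent of
    every action already in that step, else open a new step."""
    steps = []
    for action in plan:
        if steps and all(independent(action, a) for a in steps[-1]):
            steps[-1].append(action)
        else:
            steps.append([action])
    return steps
```

As the abstract notes for AltAltp itself, a greedy scheme like this is not guaranteed to find the optimal (fewest-step) parallel plan, since an early merge can block a better grouping later.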